This research paper proposes a COVID-19 monitoring and response system to identify surges in hospital patient volume and in demand for critical equipment, such as ventilators, in Southeast Asian countries, in order to gauge the burden on medical facilities. This can help authorities in these regions with resource-planning measures, redirecting resources to the areas identified by the model. Owing to the lack of publicly available data on the influx of hospital patients, or on the shortages of equipment, ICU units, or hospital beds that these countries may face, we leverage Twitter data to gather this information. The approach provides accurate results for the states of India, and we are working on validating the model for the remaining countries so that it can serve as a reliable tool for authorities to monitor hospital burden.
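The abstract does not describe the model itself; as a hedged, purely illustrative sketch (all keywords, state names, and thresholds below are hypothetical, not taken from the paper), a keyword-based surge detector over per-state tweet volumes might look like:

```python
from collections import Counter

# Hypothetical terms signalling hospital strain (illustrative only;
# the paper's actual feature set is not specified in the abstract).
SHORTAGE_TERMS = ("ventilator", "icu", "hospital bed", "oxygen")

def shortage_mentions(tweets):
    """Count tweets per state that mention any shortage-related term."""
    counts = Counter()
    for state, text in tweets:
        if any(term in text.lower() for term in SHORTAGE_TERMS):
            counts[state] += 1
    return counts

def flag_surges(counts, baseline, ratio=2.0):
    """Flag states whose mention volume is at least `ratio` x baseline."""
    return sorted(s for s, c in counts.items()
                  if c >= ratio * baseline.get(s, 1))

tweets = [
    ("Kerala", "No ventilator available at our hospital"),
    ("Kerala", "ICU beds full again today"),
    ("Kerala", "Oxygen shortage reported"),
    ("Goa", "Lovely beach weather"),
]
counts = shortage_mentions(tweets)
print(flag_surges(counts, baseline={"Kerala": 1, "Goa": 1}))  # → ['Kerala']
```

A real system would also need deduplication, geolocation of tweets, and a statistically grounded baseline rather than a fixed ratio.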
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
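The abstract describes BLOOM only as a decoder-only Transformer. A minimal pure-Python sketch of the causal (autoregressive) attention mask that defines such an architecture — a conceptual illustration, not BLOOM's actual implementation:

```python
import math

def causal_mask(seq_len):
    """Lower-triangular mask: position i may attend only to positions <= i,
    which is what makes a decoder-only language model autoregressive."""
    return [[1 if j <= i else 0 for j in range(seq_len)]
            for i in range(seq_len)]

def masked_attention_weights(scores, mask):
    """Zero out masked positions, then row-wise softmax over the rest."""
    out = []
    for row, mrow in zip(scores, mask):
        exps = [math.exp(s) if m else 0.0 for s, m in zip(row, mrow)]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

print(causal_mask(3))  # → [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
```

During training, this mask lets every token's next-token prediction be computed in parallel while still preventing any position from seeing the future.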
Active learning with strong and weak labelers considers a practical setting where we have access to both costly but accurate strong labelers and cheap but inaccurate predictions provided by weak labelers. We study this problem in the streaming setting, where decisions must be taken \textit{online}. We design a novel algorithmic template, Weak Labeler Active Cover (WL-AC), that robustly leverages the lower-quality weak labelers to reduce query complexity while retaining the desired level of accuracy. Prior active learning algorithms with access to weak labelers learn a difference classifier which predicts where the weak labels differ from the strong labelers; this requires the strong assumption of realizability of the difference classifier (Zhang and Chaudhuri, 2015). WL-AC bypasses this \textit{realizability} assumption and thus is applicable to many real-world scenarios, such as randomly corrupted weak labels and high-dimensional families of difference classifiers (\textit{e.g.,} deep neural nets). Moreover, WL-AC trades off evaluating the quality of weak labelers against fully exploiting them, which makes it possible to convert any active learning strategy into one that can leverage weak labelers. We provide an instantiation of this template that achieves the optimal query complexity for any given weak labeler, without knowing its accuracy a priori. Empirically, we propose an instantiation of the WL-AC template that can be efficiently implemented for large-scale models (\textit{e.g.,} deep neural nets) and show its effectiveness on the corrupted-MNIST dataset, significantly reducing the number of labels while keeping the same accuracy as passive learning.
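WL-AC itself relies on an active-cover construction; as a much-simplified, hypothetical illustration of the general idea — spend the strong labeler's budget only where the weak labeler looks unreliable — one might write:

```python
def label_stream(stream, weak, strong, model_predict, budget):
    """Greedy sketch: trust the weak labeler when it agrees with the
    current model; otherwise query the costly strong labeler while
    budget remains. (Illustrative only -- not the WL-AC algorithm.)"""
    labels, queries = [], 0
    for x in stream:
        w = weak(x)
        if w == model_predict(x) or queries >= budget:
            labels.append(w)          # cheap weak label accepted
        else:
            labels.append(strong(x))  # disagreement: pay for ground truth
            queries += 1
    return labels, queries

# Toy stream: true label is sign of x; the weak labeler flips labels near 0.
stream = [-3, -1, 0.2, 2, 0.1]
strong = lambda x: int(x > 0)
weak = lambda x: int(x > 0) if abs(x) > 0.5 else 1 - int(x > 0)
model = lambda x: int(x > 0)
labels, queries = label_stream(stream, weak, strong, model, budget=5)
print(labels, queries)  # → [0, 0, 1, 1, 1] 2
```

Only 2 of the 5 points needed a strong-label query, yet all 5 labels come out correct — the kind of query-complexity saving the template formalizes.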
Recent quadrotor designs have moved beyond conventional ones, placing greater emphasis on foldable and reconfigurable bodies. However, the state of the art still focuses on the mechanical feasibility of such designs, with limited discussion of the vehicle's tracking performance during configuration switching. In this paper, we propose a complete control and planning framework for attitude tracking during configuration switching and for containing any switch-induced disturbances, which could otherwise violate safety constraints and lead to a crash. The control framework comprises a morphology-aware adaptive controller with an estimator to account for parameter changes, together with a minimum-jerk trajectory planner to achieve stable flight while switching. Stability analysis of the attitude tracking is carried out using switched-system theory, and the proposed framework is validated by simulation results of a foldable quadrotor flying through a passage.
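The abstract mentions a trajectory planner for stable flight during switching without giving its cost functional; as a generic, hypothetical sketch, a rest-to-rest minimum-jerk segment (a standard building block for smooth quadrotor trajectories, not necessarily the paper's planner) uses the quintic profile:

```python
def min_jerk(x0, xf, T, t):
    """Rest-to-rest minimum-jerk position at time t in [0, T]:
    x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T.
    Velocity and acceleration are zero at both endpoints, so segments
    can be chained without jumps in the commanded motion."""
    s = t / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

# Sample a 2-second climb from altitude 1.0 m to 1.5 m.
samples = [round(min_jerk(1.0, 1.5, 2.0, t * 0.5), 4) for t in range(5)]
print(samples)  # → [1.0, 1.0518, 1.25, 1.4482, 1.5]
```

The smooth start and stop are what keep switch-time disturbances from being compounded by aggressive commanded motion.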
When NLP models are trained on text data from one time period and tested or deployed on data from another, the resulting temporal misalignment can degrade end-task performance. In this work, we establish a suite of eight diverse tasks across different domains (social media, scientific papers, news, and reviews) and periods of time (spanning five years or more) to quantify the effects of temporal misalignment in modern NLP systems. Our study focuses on the ubiquitous setting in which a pretrained model is optionally adapted via continued domain-specific pretraining, followed by task-specific finetuning. We find stronger effects of temporal misalignment on task performance than previously reported. We also find that, while temporal adaptation through continued pretraining can help, these gains are small compared to task-specific finetuning on data from the target time period. Our findings motivate continued research to improve the temporal robustness of NLP models.
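The measurement protocol — train on one period, evaluate in-period versus out-of-period — can be illustrated with a deliberately tiny toy (all data below is invented; the paper uses real corpora and pretrained models):

```python
from collections import Counter

def train(texts_by_label):
    """Per-label word counts: a tiny bag-of-words 'model'."""
    return {lab: Counter(w for t in texts for w in t.split())
            for lab, texts in texts_by_label.items()}

def predict(model, text):
    """Pick the label whose training vocabulary overlaps the text most."""
    return max(model, key=lambda lab: sum(model[lab][w] for w in text.split()))

def accuracy(model, labeled_texts):
    correct = sum(predict(model, t) == lab for t, lab in labeled_texts)
    return correct / len(labeled_texts)

# Invented data: topical vocabulary drifts between periods ("team" moves
# from sports-only usage into tech text).
train_2017 = {"tech": ["blockchain ico token sale", "token blockchain startup"],
              "sports": ["league final goal team", "team goal match"]}
test_2017 = [("ico token sale", "tech"), ("league goal match", "sports")]
test_2022 = [("team ships llm finetuning", "tech"), ("league goal match", "sports")]

model = train(train_2017)
print(accuracy(model, test_2017), accuracy(model, test_2022))  # → 1.0 0.5
```

The gap between the two accuracies is the toy analogue of the temporal-misalignment effect the paper quantifies at scale.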